Pandas I - Series and DataFrame

Pandas introduces two new data structures to Python, both of which are built on top of NumPy (which means they're fast):

  • Series: a one-dimensional labeled object, akin to a column or row in a dataset
  • DataFrame: a tabular data structure, akin to a database table

In [1]:
import pandas as pd
import numpy as np
pd.set_option('max_columns', 50)

Summary

  1. Series
    1.1 Creating
    1.2 Selecting
    1.3 Editing
    1.4 Mathematical Operations
    1.5 Missing Values
  2. DataFrame
    2.1 From Dictionary of Lists
    2.2 From/To CSV
    2.3 From/To Excel
    2.4 From/To Database
    2.5 From Clipboard
    2.6 From URL
    2.7 From Google Analytics API
  3. Merge
    3.1 Inner Join (default)
    3.2 Left Outer Join
    3.3 Right Outer Join
    3.4 Full Outer Join
  4. Concatenate

1. Series

A Series is a one-dimensional object similar to an array, list, or column in a table.
It assigns a labeled index to each item; by default, each item receives an index label from 0 to N, where N is the length of the Series minus one.

1.1 Creating


In [2]:
# create a Series with an arbitrary list
s = pd.Series([7, 'Heisenberg', 3.14, -1789710578, 'Happy Eating!'])
s


Out[2]:
0                7
1       Heisenberg
2             3.14
3      -1789710578
4    Happy Eating!
dtype: object

Alternatively, you can specify an index to use when creating the Series.


In [3]:
s = pd.Series([7, 'Heisenberg', 3.14, -1789710578, 'Happy Eating!'],
              index=['A', 'Z', 'C', 'Y', 'E'])
s


Out[3]:
A                7
Z       Heisenberg
C             3.14
Y      -1789710578
E    Happy Eating!
dtype: object

The Series constructor can convert a dictionary as well, using the keys of the dictionary as its index.


In [4]:
d = {'Chicago': 1000, 'New York': 1300, 'Portland': 900, 'San Francisco': 1100,
     'Austin': 450, 'Boston': None}
cities = pd.Series(d)
cities


Out[4]:
Austin            450
Boston            NaN
Chicago          1000
New York         1300
Portland          900
San Francisco    1100
dtype: float64

1.2 Selecting

You can use the index to select specific items from the Series ...


In [5]:
cities['Chicago']


Out[5]:
1000.0

In [6]:
cities[['Chicago', 'Portland', 'San Francisco']]


Out[6]:
Chicago          1000
Portland          900
San Francisco    1100
dtype: float64

Or you can use boolean indexing for selection.


In [7]:
cities[cities < 1000]


Out[7]:
Austin      450
Portland    900
dtype: float64

That last one might look a little odd, so let's make it clearer: cities < 1000 returns a Series of True/False values, which we then pass to our Series cities, returning only the items whose corresponding value is True.


In [8]:
less_than_1000 = cities < 1000
print less_than_1000
print '\n'
print cities[less_than_1000]


Austin            True
Boston           False
Chicago          False
New York         False
Portland          True
San Francisco    False
dtype: bool


Austin      450
Portland    900
dtype: float64

1.3 Editing

You can also change the values in a Series on the fly.


In [9]:
# changing based on the index
print 'Old value:', cities['Chicago']
cities['Chicago'] = 1400
print 'New value:', cities['Chicago']


Old value: 1000.0
New value: 1400.0

In [10]:
# changing values using boolean logic
print cities[cities < 1000]
print '\n'
cities[cities < 1000] = 750

print cities[cities < 1000]


Austin      450
Portland    900
dtype: float64


Austin      750
Portland    750
dtype: float64

1.4 Mathematical Operations

Mathematical operations can be done using scalars and functions.


In [11]:
# divide city values by 3
cities / 3


Out[11]:
Austin           250.000000
Boston                  NaN
Chicago          466.666667
New York         433.333333
Portland         250.000000
San Francisco    366.666667
dtype: float64

In [12]:
# square city values
np.square(cities)


Out[12]:
Austin            562500
Boston               NaN
Chicago          1960000
New York         1690000
Portland          562500
San Francisco    1210000
dtype: float64

You can add two Series together, which returns a union of the two Series, with the addition occurring on the shared index labels. Index labels that are not present in both Series produce a NULL/NaN (not a number) value.


In [13]:
print cities[['Chicago', 'New York', 'Portland']]
print '\n'
print cities[['Austin', 'New York']]
print '\n'
print cities[['Chicago', 'New York', 'Portland']] + cities[['Austin', 'New York']]


Chicago     1400
New York    1300
Portland     750
dtype: float64


Austin       750
New York    1300
dtype: float64


Austin       NaN
Chicago      NaN
New York    2600
Portland     NaN
dtype: float64

Notice that because Austin, Chicago, and Portland were not found in both Series, they were returned with NULL/NaN values.

1.5 Missing Values

What if you aren't sure whether an item is in the Series? You can check using idiomatic Python.


In [14]:
print 'Seattle' in cities
print 'San Francisco' in cities


False
True

NULL checking can be performed with isnull and notnull.


In [15]:
# returns a boolean series indicating which values aren't NULL
cities.notnull()


Out[15]:
Austin            True
Boston           False
Chicago           True
New York          True
Portland          True
San Francisco     True
dtype: bool

In [16]:
# use boolean logic to grab the NULL cities
print cities.isnull()
print '\n'
print cities[cities.isnull()]


Austin           False
Boston            True
Chicago          False
New York         False
Portland         False
San Francisco    False
dtype: bool


Boston   NaN
dtype: float64

2. DataFrame

A DataFrame is a tabular data structure composed of rows and columns, akin to a spreadsheet, a database table, or R's data.frame object. You can also think of a DataFrame as a group of Series objects (the rows) that share an index (the column names).
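To make that concrete, here's a minimal sketch (the frame and its columns are just illustrative): selecting a single row of a DataFrame gives back a Series whose index is the DataFrame's column names.

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
print df.loc[0]   # a Series with index ['a', 'b'] and name 0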

2.1 From Dictionary of Lists

To create a DataFrame out of common Python data structures, we can pass a dictionary of lists to the DataFrame constructor. Using the columns parameter allows us to tell the constructor how we'd like the columns ordered. By default, the DataFrame constructor will order the columns alphabetically (though this isn't the case when reading from a file - more on that next).


In [17]:
data = {'year': [2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012],
        'team': ['Bears', 'Bears', 'Bears', 'Packers', 'Packers', 'Lions', 'Lions', 'Lions'],
        'wins': [11, 8, 10, 15, 11, 6, 10, 4],
        'losses': [5, 8, 6, 1, 5, 10, 6, 12]}
football = pd.DataFrame(data, columns=['year', 'team', 'wins', 'losses'])
print football


   year     team  wins  losses
0  2010    Bears    11       5
1  2011    Bears     8       8
2  2012    Bears    10       6
3  2011  Packers    15       1
4  2012  Packers    11       5
5  2010    Lions     6      10
6  2011    Lions    10       6
7  2012    Lions     4      12

Much more often, you'll have a dataset you want to read into a DataFrame. Let's go through several common ways of doing so.

2.2 From/To CSV

Reading a CSV is as simple as calling the read_csv() function. By default, the read_csv() function expects the column separator to be a comma, but you can change that using the sep parameter.



In [19]:
# Source: baseball-reference.com/players/r/riverma01.shtml
!head -n 5 data/mariano-rivera.csv


Year,Age,Tm,Lg,W,L,W-L%,ERA,G,GS,GF,CG,SHO,SV,IP,H,R,ER,HR,BB,IBB,SO,HBP,BK,WP,BF,ERA+,WHIP,H/9,HR/9,BB/9,SO/9,SO/BB,Awards
1995,25,NYY,AL,5,3,.625,5.51,19,10,2,0,0,0,67.0,71,43,41,11,30,0,51,2,1,0,301,84,1.507,9.5,1.5,4.0,6.9,1.70,
1996,26,NYY,AL,8,3,.727,2.09,61,0,14,0,0,5,107.2,73,25,25,1,34,3,130,2,0,1,425,240,0.994,6.1,0.1,2.8,10.9,3.82,CYA-3MVP-12
1997,27,NYY,AL,6,4,.600,1.88,66,0,56,0,0,43,71.2,65,17,15,5,20,6,68,0,0,2,301,239,1.186,8.2,0.6,2.5,8.5,3.40,ASMVP-25
1998,28,NYY,AL,3,0,1.000,1.91,54,0,49,0,0,36,61.1,48,13,13,3,17,1,36,1,0,0,246,233,1.060,7.0,0.4,2.5,5.3,2.12,

In [20]:
from_csv = pd.read_csv('data/mariano-rivera.csv')
from_csv.head()


Out[20]:
Year Age Tm Lg W L W-L% ERA G GS GF CG SHO SV IP H R ER HR BB IBB SO HBP BK WP BF ERA+ WHIP H/9 HR/9 BB/9 SO/9 SO/BB Awards
0 1995 25 NYY AL 5 3 0.625 5.51 19 10 2 0 0 0 67.0 71 43 41 11 30 0 51 2 1 0 301 84 1.507 9.5 1.5 4.0 6.9 1.70 NaN
1 1996 26 NYY AL 8 3 0.727 2.09 61 0 14 0 0 5 107.2 73 25 25 1 34 3 130 2 0 1 425 240 0.994 6.1 0.1 2.8 10.9 3.82 CYA-3MVP-12
2 1997 27 NYY AL 6 4 0.600 1.88 66 0 56 0 0 43 71.2 65 17 15 5 20 6 68 0 0 2 301 239 1.186 8.2 0.6 2.5 8.5 3.40 ASMVP-25
3 1998 28 NYY AL 3 0 1.000 1.91 54 0 49 0 0 36 61.1 48 13 13 3 17 1 36 1 0 0 246 233 1.060 7.0 0.4 2.5 5.3 2.12 NaN
4 1999 29 NYY AL 4 3 0.571 1.83 66 0 63 0 0 45 69.0 43 15 14 2 18 3 52 3 1 2 268 257 0.884 5.6 0.3 2.3 6.8 2.89 ASCYA-3MVP-14

Our file had headers, which the function inferred upon reading in the file. If a file doesn't have headers, we can pass header=None to the function along with a list of column names to use:


In [21]:
# command line : read head of file
# Source: pro-football-reference.com/players/M/MannPe00/touchdowns/passing/2012/
!head -n 5 data/peyton-passing-TDs-2012.csv


1,1,2012-09-09,DEN,,PIT,W 31-19,3,71,Demaryius Thomas,Trail 7-13,Lead 14-13*
2,1,2012-09-09,DEN,,PIT,W 31-19,4,1,Jacob Tamme,Trail 14-19,Lead 22-19*
3,2,2012-09-17,DEN,@,ATL,L 21-27,2,17,Demaryius Thomas,Trail 0-20,Trail 7-20
4,3,2012-09-23,DEN,,HOU,L 25-31,4,38,Brandon Stokley,Trail 11-31,Trail 18-31
5,3,2012-09-23,DEN,,HOU,L 25-31,4,6,Joel Dreessen,Trail 18-31,Trail 25-31

In [22]:
cols = ['num', 'game', 'date', 'team', 'home_away', 'opponent',
        'result', 'quarter', 'distance', 'receiver', 'score_before',
        'score_after']
no_headers = pd.read_csv('data/peyton-passing-TDs-2012.csv', sep=',', header=None,
                         names=cols)
no_headers.head()


Out[22]:
num game date team home_away opponent result quarter distance receiver score_before score_after
0 1 1 2012-09-09 DEN NaN PIT W 31-19 3 71 Demaryius Thomas Trail 7-13 Lead 14-13*
1 2 1 2012-09-09 DEN NaN PIT W 31-19 4 1 Jacob Tamme Trail 14-19 Lead 22-19*
2 3 2 2012-09-17 DEN @ ATL L 21-27 2 17 Demaryius Thomas Trail 0-20 Trail 7-20
3 4 3 2012-09-23 DEN NaN HOU L 25-31 4 38 Brandon Stokley Trail 11-31 Trail 18-31
4 5 3 2012-09-23 DEN NaN HOU L 25-31 4 6 Joel Dreessen Trail 18-31 Trail 25-31

pandas' various reader functions have many parameters that let you do things like skip lines of the file, parse dates, or specify how to handle NA/NULL data points.
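For example, a quick sketch (the file name and column names here are just illustrative):

# skip the first line of the file, parse the 'date' column as dates,
# and treat empty strings and 'N/A' as missing values
df = pd.read_csv('data/some-file.csv', skiprows=1,
                 parse_dates=['date'], na_values=['', 'N/A'])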

Writing to CSV

There's also a set of writer functions for writing to a variety of formats (CSVs, HTML tables, JSON). They function exactly as you'd expect and are typically named to_<format>:

my_dataframe.to_csv('path_to_file.csv')
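The other writers work the same way (the file names here are illustrative):

my_dataframe.to_json('path_to_file.json')

with open('path_to_file.html', 'w') as f:
    my_dataframe.to_html(f)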

Take a look at the IO documentation to familiarize yourself with file reading/writing functionality.

2.3 From/To Excel

Know who hates VBA? Me. I bet you do, too. Thankfully, pandas allows you to read and write Excel files, so you can easily read from Excel, write your code in Python, and then write back out to Excel - no need for VBA.

Reading Excel files requires the xlrd library. You can install it via pip (pip install xlrd).

Let's first write a DataFrame to Excel.


In [23]:
# this is the DataFrame we created from a dictionary earlier
print football.head()


   year     team  wins  losses
0  2010    Bears    11       5
1  2011    Bears     8       8
2  2012    Bears    10       6
3  2011  Packers    15       1
4  2012  Packers    11       5

In [24]:
# since our index on the football DataFrame is meaningless, let's not write it
football.to_excel('data/football.xlsx', index=False)

In [25]:
# command line : list .xlsx files
!ls -l data/*.xlsx


-rw-r--r--  1 romainlepert  staff  5589 25 sep 14:20 data/football.xlsx

In [26]:
# delete the DataFrame
del football

In [27]:
# read from Excel
football = pd.read_excel('data/football.xlsx')
print football


   year     team  wins  losses
0  2010    Bears    11       5
1  2011    Bears     8       8
2  2012    Bears    10       6
3  2011  Packers    15       1
4  2012  Packers    11       5
5  2010    Lions     6      10
6  2011    Lions    10       6
7  2012    Lions     4      12

2.4 From/To Database

pandas also has some support for reading/writing DataFrames directly from/to a database (see the docs). You'll typically just need to pass a connection object to the read_frame or write_frame functions within the pandas.io.sql module (newer versions of pandas replace these with read_sql and to_sql).

Note that write_frame executes as a series of INSERT INTO statements and thus trades speed for simplicity. If you're writing a large DataFrame to a database, it might be quicker to write the DataFrame to CSV and load that directly using the database's file import arguments.


In [28]:
from pandas.io import sql
import sqlite3

conn = sqlite3.connect('/Users/greda/Dropbox/gregreda.com/_code/towed')
query = "SELECT * FROM towed WHERE make = 'FORD';"

results = sql.read_frame(query, con=conn)
print results.head()


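Since that database file isn't available here, below is a minimal self-contained sketch using an in-memory SQLite database and the newer to_sql/read_sql API (the table name 'teams' is illustrative):

import sqlite3

conn = sqlite3.connect(':memory:')

# write the football DataFrame to a 'teams' table, then query it back
football.to_sql('teams', conn, index=False)
results = pd.read_sql("SELECT * FROM teams WHERE team = 'Bears';", conn)
print results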

2.5 From Clipboard

While the results of a query can be read directly into a DataFrame, I prefer to read the results directly from the clipboard. I'm often tweaking queries in my SQL client (Sequel Pro), so I'd rather see the results there before reading them into pandas. Once I'm confident I have the data I want, I read it into a DataFrame.

This works just as well with any type of delimited data you've copied to your clipboard. The function does a good job of inferring the delimiter, but you can also use the sep parameter to be explicit.

For example, copy a table of Hank Aaron's statistics from baseball-reference.com to your clipboard, then read it in:


In [29]:
hank = pd.read_clipboard()
hank.head()



2.6 From URL

We can also use Python's StringIO library to read data directly from a URL. StringIO allows you to treat a string as a file-like object.

Let's use the best sandwiches data that I wrote about scraping a while back.


In [30]:
from urllib2 import urlopen
from StringIO import StringIO

# store the text from the URL response in our url variable
url = urlopen('https://raw.github.com/gjreda/best-sandwiches/master/data/best-sandwiches-geocode.tsv').read()

# treat the tab-separated text as a file with StringIO and read it into a DataFrame
from_url = pd.read_table(StringIO(url), sep='\t')
from_url.head(3)


Out[30]:
rank sandwich restaurant description price address city phone website full_address formatted_address lat lng
0 1 BLT Old Oak Tap The B is applewood smoked&mdash;nice and snapp... $10 2109 W. Chicago Ave. Chicago 773-772-0406 theoldoaktap.com 2109 W. Chicago Ave., Chicago 2109 West Chicago Avenue, Chicago, IL 60622, USA 41.895734 -87.679960
1 2 Fried Bologna Au Cheval Thought your bologna-eating days had retired w... $9 800 W. Randolph St. Chicago 312-929-4580 aucheval.tumblr.com 800 W. Randolph St., Chicago 800 West Randolph Street, Chicago, IL 60607, USA 41.884672 -87.647754
2 3 Woodland Mushroom Xoco Leave it to Rick Bayless and crew to come up w... $9.50. 445 N. Clark St. Chicago 312-334-3688 rickbayless.com 445 N. Clark St., Chicago 445 North Clark Street, Chicago, IL 60654, USA 41.890602 -87.630925
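Note that pandas' reader functions can usually accept a URL directly, in which case StringIO isn't needed:

from_url = pd.read_table('https://raw.github.com/gjreda/best-sandwiches/master/data/best-sandwiches-geocode.tsv', sep='\t')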

2.7 From Google Analytics API

pandas also has some integration with the Google Analytics API, though there is some setup required. I won't be covering it, but you can read more about it here and here.

3. Merge

Use the pandas.merge() function to merge/join datasets in a relational manner (see the docs).
Like SQL's JOIN clause, pandas.merge() allows two DataFrames to be joined on one or more keys.

  • the how parameter specifies which keys are included in the resulting table
  • the on, left_on, right_on, left_index, and right_index parameters specify the columns or indexes to join on

how: {"inner", "left", "right", "outer"}

  • "left": use keys from the left frame only
  • "right": use keys from the right frame only
  • "inner" (default): use the intersection of keys from both frames
  • "outer": use the union of keys from both frames

There are several cases to consider, and they are very important to understand:

  • one-to-one relationships: only one table is necessary (no join needed)
    • one user has one phone number
    • one phone number belongs to one user
  • one-to-many relationships: two tables are necessary (see the sketch after this list)
    • one post has many comments
    • one comment belongs to one post
      • merge(left, right, on=['key'], how='?')
  • many-to-many relationships: three tables are necessary, the third being a junction table
    • one playlist has many songs
    • one song belongs to many playlists
      • merge(left.reset_index(), right.reset_index(), on=['key'], how='?').set_index(['key_left','key_right'])
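For instance, a minimal one-to-many sketch (posts and comments are illustrative names and data):

posts = pd.DataFrame({'post_id': [1, 2],
                      'title': ['Intro', 'Follow-up']})
comments = pd.DataFrame({'post_id': [1, 1, 2],
                         'text': ['Nice!', 'Thanks.', 'More please']})
# each post row matches every comment row sharing its post_id
print pd.merge(posts, comments, on='post_id', how='inner')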

Below, we create two small DataFrames and walk through each type of join.


In [31]:
left = pd.DataFrame({'key': range(5),
                     'left_value': ['L0', 'L1', 'L2', 'L3', 'L4']})
right = pd.DataFrame({'key': range(2, 7),
                      'right_value': ['R0', 'R1', 'R2', 'R3', 'R4']})
print left, '\n'
print right


   key left_value
0    0         L0
1    1         L1
2    2         L2
3    3         L3
4    4         L4 

   key right_value
0    2          R0
1    3          R1
2    4          R2
3    5          R3
4    6          R4

3.1 Inner Join (default)

Selects the rows from both tables with matching keys.


In [32]:
print pd.merge(left, right, on='key', how='inner')


   key left_value right_value
0    2         L2          R0
1    3         L3          R1
2    4         L4          R2

  • If our key columns had different names, we could have used the left_on and right_on parameters to specify which field to join on from each frame:
    pd.merge(left, right, left_on='left_key', right_on='right_key')

  • If our key columns were indexes, we could use the left_index or right_index parameters (which take True/False values) to join on the index instead. You can mix and match columns and indexes like so:
    pd.merge(left, right, left_on='key', right_index=True)

3.2 Left Outer Join

Returns all rows from the left frame, along with the matching rows from the right frame. Where there is no match, the right frame's values are NULL (NaN).


In [33]:
print pd.merge(left, right, on='key', how='left')


   key left_value right_value
0    0         L0         NaN
1    1         L1         NaN
2    2         L2          R0
3    3         L3          R1
4    4         L4          R2

3.3 Right Outer Join

Returns all rows from the right frame, along with the matching rows from the left frame. Where there is no match, the left frame's values are NULL (NaN).


In [34]:
print pd.merge(left, right, on='key', how='right')


   key left_value right_value
0    2         L2          R0
1    3         L3          R1
2    4         L4          R2
3    5        NaN          R3
4    6        NaN          R4

3.4 Full Outer Join

Combines the results of both the Left Outer Join and the Right Outer Join.


In [35]:
print pd.merge(left, right, on='key', how='outer')


   key left_value right_value
0    0         L0         NaN
1    1         L1         NaN
2    2         L2          R0
3    3         L3          R1
4    4         L4          R2
5    5        NaN          R3
6    6        NaN          R4

4. Concatenate

Use the pandas.concat() function to combine Series/DataFrames into one unified object (see the docs).

pandas.concat() takes a list of Series or DataFrames and returns a Series or DataFrame of the concatenated objects. Note that because the function takes a list, you can combine many objects at once.

Use the axis parameter to define the axis along which to concatenate:

  • axis=0: concatenate vertically (default)
  • axis=1: concatenate side-by-side
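
For example, a quick sketch of the default vertical concatenation, reusing the left and right frames from the merge section:

# stack the two frames on top of each other; columns missing
# from a frame are filled with NaN
print pd.concat([left, right])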


In [36]:
pd.concat([left, right], axis=1)


Out[36]:
   key left_value  key right_value
0    0         L0    2          R0
1    1         L1    3          R1
2    2         L2    4          R2
3    3         L3    5          R3
4    4         L4    6          R4